15 research outputs found

    Continuous close-range 3D object pose estimation

    In the context of future manufacturing lines, removing fixtures will be a fundamental step towards increasing the flexibility of autonomous systems in assembly and logistics operations. Vision-based 3D pose estimation is a necessity for accurately handling objects that might not be placed at fixed positions during robot task execution. Industrial tasks bring multiple challenges for the robust pose estimation of objects, such as difficult object properties, tight cycle times and constraints on camera views. In particular, when interacting with objects, we have to work with close-range partial views of objects, which pose a new challenge for typical view-based pose estimation methods. In this paper, we present a 3D pose estimation method based on a gradient-ascent particle filter that integrates new observations on the fly to improve the pose estimate. Thereby, we can apply this method online during task execution to save valuable cycle time. In contrast to other view-based pose estimation methods, we model potential views in the full 6-dimensional space, which allows us to cope with close-range partial object views. We demonstrate the approach on a real assembly task, in which the algorithm usually converges to the correct pose within 10 to 15 iterations with an average error of less than 8 mm.
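
    As a rough illustration of the idea, the following Python sketch implements a plain bootstrap particle filter over 6D poses (translation plus rotation vector). The fitness function, noise scales and particle counts are invented for illustration, and the paper's gradient-ascent refinement of each particle is omitted; this is not the authors' implementation.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def fitness(p, observed, model):
        # Illustrative score: negative mean nearest-point distance between
        # the observed cloud and the model transformed by the particle pose.
        rot = R.from_rotvec(p[3:]).as_matrix()
        cloud = model @ rot.T + p[:3]
        d = np.linalg.norm(observed[:, None] - cloud[None, :], axis=2).min(axis=1)
        return -d.mean()

    def estimate_pose(observed, model, n_particles=100, n_iters=15):
        # Particles are 6D poses [tx, ty, tz, rx, ry, rz] (rotation vector).
        particles = np.random.randn(n_particles, 6) * 0.1
        for _ in range(n_iters):
            w = np.array([fitness(p, observed, model) for p in particles])
            w = np.exp(w - w.max()); w /= w.sum()            # softmax weights
            idx = np.random.choice(n_particles, n_particles, p=w)
            particles = particles[idx] + np.random.randn(n_particles, 6) * 0.005
        return max(particles, key=lambda p: fitness(p, observed, model))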

SkiROS: A four-tiered architecture for task-level programming of industrial mobile manipulators

    During the last decades, methods for intuitive task-level programming of robots have become a fundamental point of interest for industrial applications. The paper at hand presents SkiROS (Skill-based Robot Operating System), a novel software architecture based on the skill paradigm. The skill paradigm has already been used and tested within the FP7 project TAPAS, and we are going to use it in several new FP7 projects (CARLOS, STAMINA, ACAT). It facilitates task-level programming of mobile manipulators by providing the robot with a set of movement primitives, skills and tasks. This hierarchy brings many advantages, the most relevant being the separation of control into the layers of hardware abstraction (proxy), multi-sensory control (primitive), object-level abstraction (skill) and planning (task). The clear definition of, and division into, different abstraction levels allows the implementation of a flexible, highly modular system for the development of cognitive robot tasks.
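
    To make the four tiers concrete, here is a minimal, hypothetical sketch of how such a hierarchy could be expressed in code; none of the class or method names come from SkiROS itself.

    class Proxy:
        # Tier 1, hardware abstraction: wraps a device driver behind a
        # uniform interface.
        def command(self, setpoint):
            raise NotImplementedError

    class Primitive:
        # Tier 2, multi-sensory control: a closed-loop motion built on proxies.
        def __init__(self, proxies):
            self.proxies = proxies
        def execute(self):
            raise NotImplementedError

    class Skill:
        # Tier 3, object-level abstraction, e.g. "pick <object>": a sequence
        # of primitives with pre- and post-conditions.
        def __init__(self, primitives):
            self.primitives = primitives
        def run(self, target_object):
            for primitive in self.primitives:
                primitive.execute()

    class Task:
        # Tier 4, planning: an ordered set of (skill, target) pairs that
        # together achieve a goal.
        def __init__(self, skills):
            self.skills = skills
        def run(self):
            for skill, target in self.skills:
                skill.run(target)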

    Real-Time Image Segmentation Using a Fixation-Based Approach


    Fast view-based pose estimation of industrial objects in point clouds using a particle filter with an ICP-based motion model

    The registration of an observed point set to a known model to estimate its 3D pose is a common task for the autonomous manipulation of objects. Especially in industrial environments, robotic systems need to accurately estimate the pose of objects in order to successfully perform picking, placing or assembly tasks. However, the characteristics of industrial objects often cause difficulties for classical pose estimation algorithms, especially when using IR depth sensors. In this work, we propose to resolve ambiguities of the pose estimate by representing it as a virtual view on a reference model within an adapted particle filter system. To this end, a simple but fast method to cast views from the reference model is presented, making a training phase obsolete while increasing the accuracy of the estimate. The view-based approach increases the robustness of the registration process and reformulates pose estimation as the problem of determining the most likely view using a particle filter. By incorporating a local optimizer (ICP) into the dynamics model of the particle filter, the proposed method directs the search in the 6-dimensional pose space, reducing the number of particles needed to about 50 while decreasing the convergence time to a minimum, thereby making it viable for real-time pose estimation. In contrast to other pose estimation methods, this approach explores the possibilities of sequential pose estimation using only plain point clouds without additional features.
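
    A hedged sketch of such an ICP-based motion model, using Open3D's standard point-to-point ICP as the local optimizer; the correspondence distance, iteration budget and noise scale are assumptions, and rotation noise is omitted for brevity.

    import numpy as np
    import open3d as o3d

    def icp_motion_model(particles, observed_pcd, model_pcd, sigma=0.005):
        # Each particle (a 4x4 pose) is pulled toward a nearby local optimum
        # by a few ICP iterations, then perturbed again so the filter keeps
        # exploring; this directed search is what lets ~50 particles suffice.
        propagated = []
        for pose in particles:
            result = o3d.pipelines.registration.registration_icp(
                model_pcd, observed_pcd,
                max_correspondence_distance=0.02, init=pose,
                criteria=o3d.pipelines.registration.ICPConvergenceCriteria(
                    max_iteration=5))
            refined = np.array(result.transformation)
            refined[:3, 3] += np.random.randn(3) * sigma  # translation noise
            propagated.append(refined)
        return propagated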

    Continuous hand-eye calibration using 3D points

    The recent development of calibration algorithms has been driven in two major directions: (1) an increasing accuracy of mathematical approaches and (2) an increasing flexibility in usage by reducing the dependency on calibration objects. These two trends, however, seem to be contradictory, since the overall accuracy is directly related to the accuracy of the pose estimation of the calibration object, therefore demanding large objects, while increased flexibility leads to smaller objects or noisier estimation methods. The method presented in this paper aims to resolve this problem in two steps: First, we derive a simple closed-form solution with a shifted focus towards the equation of translation that solves only for the necessary hand-eye transformation. We show that it is superior in accuracy and robustness compared to traditional approaches. Second, we decrease the dependency on the calibration object to a single 3D point by using a similar formulation based on the equation of translation, which is much less affected by the estimation error of the calibration object's orientation. Moreover, it makes the estimation of the orientation obsolete while taking advantage of the higher accuracy and robustness of the first solution, resulting in a versatile method for continuous hand-eye calibration.
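
    The translation-centred idea can be sketched from the classic hand-eye equation AX = XB, whose translation part reads (R_A - I) t_X = R_X t_B - t_A. Stacking this over all motion pairs yields a linear least-squares problem. The snippet below shows that textbook formulation, not necessarily the paper's exact closed form, and assumes the rotation R_X has been estimated separately.

    import numpy as np

    def solve_translation(RA_list, tA_list, tB_list, RX):
        # Stack (R_A - I) t_X = R_X t_B - t_A over all n motion pairs and
        # solve for the hand-eye translation t_X by least squares.
        A = np.vstack([RA - np.eye(3) for RA in RA_list])           # (3n, 3)
        b = np.concatenate([RX @ tB - tA
                            for tA, tB in zip(tA_list, tB_list)])   # (3n,)
        tX, *_ = np.linalg.lstsq(A, b, rcond=None)
        return tX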

    Extended behavior trees for quick definition of flexible robotic tasks

    The requirement of flexibility in modern industries demands robots that can be efficiently and quickly adapted to different tasks. A way to achieve such a flexible programming paradigm is to instruct robots with task goals and let planning algorithms deduce the correct sequence of actions to use in the specific context. A common approach is to connect the skills that realize a semantically defined operation in the planning domain, such as picking or placing an object, to specific executable functions. As a result, the skills are treated as independent components, which leads to suboptimal execution. In this paper we present an approach where the execution procedures and the planning domain are specified at the same time using solely extended Behavior Trees (eBT), a model formalized and discussed in this paper. At run-time, the robot can use the more abstract skills to plan a sequence using a PDDL planner, expand the sequence into a hierarchical tree, and re-organize it to optimize the execution time and the use of resources. The optimization is demonstrated on a kitting operation in both simulation and a lab environment, showing savings of up to 20% in the final execution time.
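
    To illustrate the general pattern of expanding a planned skill sequence into a tree whose branches can then be regrouped, here is a small generic behavior-tree sketch; it is not the eBT formalism from the paper, and all node and action names are invented.

    # The three classic behavior-tree return states.
    SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

    class Action:
        # Leaf node: wraps an executable skill or primitive.
        def __init__(self, name, fn):
            self.name, self.fn = name, fn
        def tick(self):
            return self.fn()

    class Sequence:
        # Ticks children in order; stops at the first non-successful child.
        def __init__(self, children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != SUCCESS:
                    return status
            return SUCCESS

    # A PDDL planner might emit the flat sequence pick(A), place(A),
    # pick(B), place(B); after expansion it can be regrouped per object:
    tree = Sequence([
        Sequence([Action("pick_A", lambda: SUCCESS),
                  Action("place_A", lambda: SUCCESS)]),
        Sequence([Action("pick_B", lambda: SUCCESS),
                  Action("place_B", lambda: SUCCESS)]),
    ])
    print(tree.tick())  # "success"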

Motion Generators Combined with Behavior Trees: A Novel Approach to Skill Modelling

    Task-level programming based on skills has often been proposed as a means to decrease the programming complexity of industrial robots. Several models are based on encapsulating complex motions into self-contained primitive blocks. A semantic skill is then defined as a deterministic sequence of these primitives. A major limitation is that existing frameworks do not support the coordination of concurrent motion primitives with possible interference. This decreases their reusability and scalability in unstructured environments, where dynamic and reactive adaptation of motions is often required. This paper presents a novel framework that generates adaptive behaviors by modeling skills as concurrent motion primitives that are activated dynamically when their conditions trigger. The approach exploits the additive property of motion generators to superpose multiple contributions. We demonstrate the applicability on a real assembly use case and discuss the gained benefits.
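
    The additive superposition can be sketched as follows: each active motion primitive maps the current state to a Cartesian velocity contribution, and all primitives whose trigger condition holds are summed. The fields, gains and condition functions below are invented for illustration.

    import numpy as np

    def reach(state):
        # Attractor primitive: velocity toward the goal position.
        return 1.5 * (state["goal"] - state["pos"])

    def avoid(state):
        # Repulsive primitive: pushes away when close to an obstacle.
        d = state["pos"] - state["obstacle"]
        n = np.linalg.norm(d)
        return (0.05 / n**2) * (d / n) if n < 0.3 else np.zeros(3)

    def skill_velocity(state, primitives, conditions):
        # Superpose the contributions of all primitives whose trigger
        # condition is currently active.
        v = np.zeros(3)
        for primitive, condition in zip(primitives, conditions):
            if condition(state):
                v += primitive(state)
        return v

    state = {"pos": np.zeros(3),
             "goal": np.array([0.4, 0.0, 0.2]),
             "obstacle": np.array([0.2, 0.0, 0.1])}
    print(skill_velocity(state, [reach, avoid],
                         [lambda s: True, lambda s: True]))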